The lottery ticket hypothesis has sparked the rapid development of pruning algorithms that perform structure learning by identifying sparse subnetworks of large, randomly initialized neural networks. The existence of such "winning tickets" has been proven theoretically, but only at suboptimal sparsity levels. Contemporary pruning algorithms have furthermore been struggling to identify sparse lottery tickets for complex learning tasks. Is this suboptimal sparsity merely an artifact of existence proofs and algorithms, or a general limitation of the pruning approach? And if very sparse tickets do exist, are current algorithms able to find them, or are further improvements needed to achieve effective network compression? To answer these questions systematically, we derive a framework for planting and hiding target architectures within large, randomly initialized neural networks. For three common challenges in machine learning, we hand-craft extremely sparse network topologies, plant them in large neural networks, and evaluate state-of-the-art lottery ticket pruning methods. We find that the current limitations of pruning algorithms in identifying extremely sparse tickets are algorithmic rather than fundamental, and we anticipate that our planting framework will facilitate the future development of efficient pruning algorithms, as it addresses the lack of baselines in the field raised by Frankle et al.
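Since the planting idea is quite concrete, here is a minimal NumPy sketch of what planting a target into a single linear layer could look like. The `plant` helper, the row/column sampling, and the 2x3 target are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def plant(target_w, big_shape, rng):
    """Embed a small target weight matrix into a large, randomly
    initialized layer; return the layer and the mask of the hidden ticket."""
    big = rng.standard_normal(big_shape)
    mask = np.zeros(big_shape, dtype=bool)
    rows = rng.choice(big_shape[0], size=target_w.shape[0], replace=False)
    cols = rng.choice(big_shape[1], size=target_w.shape[1], replace=False)
    big[np.ix_(rows, cols)] = target_w          # hide the target weights
    mask[np.ix_(rows, cols)] = target_w != 0    # only its nonzeros form the ticket
    return big, mask

# hypothetical 2x3 sparse target hidden inside a 64x128 layer
target = np.array([[0.5, 0.0, -1.0],
                   [0.0, 2.0,  0.0]])
planted, ticket = plant(target, (64, 128), rng)
print(int(ticket.sum()), "of", planted.size, "weights form the hidden ticket")
```

A pruning algorithm can then be scored against `ticket` as a known ground-truth baseline, which is exactly the kind of comparison the abstract says was missing.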
The strong lottery ticket hypothesis holds the promise that pruning randomly initialized deep neural networks could offer a computationally efficient alternative to deep learning with stochastic gradient descent. Common parameter initialization schemes and existence proofs, however, focus on networks with zero biases, thus foregoing the potential universal approximation property of pruning. To fill this gap, we extend multiple initialization schemes and existence proofs to non-zero biases, including an explicit "looks-linear" approach for ReLU activation functions. These not only enable truly orthogonal parameter initialization but also reduce potential pruning errors. In experiments on standard benchmark data, we further highlight the practical benefits of non-zero bias initialization schemes and provide theoretically inspired extensions of state-of-the-art strong lottery ticket pruning.
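As an illustration of the "looks-linear" idea, the following NumPy sketch mirrors a base weight matrix so that a ReLU layer computes an exactly linear map at initialization. The block layout and the resulting factor of 2 are one common variant of the construction and only an assumption about the paper's exact scheme:

```python
import numpy as np

def looks_linear(w):
    """Mirror a base weight matrix so the ReLU network is linear at
    initialization, using relu(z) - relu(-z) = z."""
    return np.block([[ w, -w],
                     [-w,  w]])

rng = np.random.default_rng(0)
w = rng.standard_normal((3, 3))
W = looks_linear(w)                      # 6x6 mirrored layer

x = rng.standard_normal(3)
x_mirr = np.concatenate([x, -x])         # mirrored input copy
h = np.maximum(W @ x_mirr, 0.0)          # ReLU layer
recovered = h[:3] - h[3:]                # positive minus negative path
print(np.allclose(recovered, 2 * (w @ x)))  # True: linear map, up to a factor 2
```

Because nothing is cut off by the ReLU at initialization, orthogonality of `w` carries over to the full network, which is the property the abstract refers to.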
State-of-the-art deep learning methods achieve human-like performance on many tasks but still make errors. Characterizing these errors in easily interpretable terms gives insight into whether a classifier is prone to making systematic errors, and also provides a way to act on and improve the classifier. We propose to discover those feature-value combinations (i.e., patterns) that strongly correlate with correct resp. erroneous predictions, in order to obtain a global and interpretable description of arbitrary classifiers. We show that this is an instance of the more general label description problem, which we formulate in terms of the Minimum Description Length principle. To discover a good pattern set, we develop the efficient Premise algorithm. Through an extensive set of experiments we show that it performs well in practice on both synthetic and real-world data. Unlike existing solutions, it recovers ground-truth patterns even on highly imbalanced data over many features. Through two case studies on visual question answering and named entity recognition, we confirm that Premise gives clear and actionable insight into the systematic errors made by modern NLP classifiers.
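Premise itself is MDL-based, but the underlying notion of a label-descriptive pattern can be illustrated with a deliberately naive stand-in: score each feature-value combination by its lift with respect to the error label. The toy data and the lift score below are assumptions for illustration, not the paper's algorithm:

```python
from itertools import combinations

# toy data: feature set per instance, plus whether the classifier was correct
instances = [
    ({"len:long", "has_digit"}, "error"),
    ({"len:long", "has_digit"}, "error"),
    ({"len:long"},              "correct"),
    ({"has_digit"},             "correct"),
    ({"len:short"},             "correct"),
]

def lift(pattern, label):
    """Naive correlation score: P(label | pattern) / P(label)."""
    covered = [l for feats, l in instances if pattern <= feats]
    if not covered:
        return 0.0
    base = sum(l == label for _, l in instances) / len(instances)
    return (sum(l == label for l in covered) / len(covered)) / base

features = sorted(set().union(*(f for f, _ in instances)))
candidates = [frozenset(c) for k in (1, 2) for c in combinations(features, k)]
best = max(candidates, key=lambda p: lift(p, "error"))
print(set(best), "lift:", round(lift(best, "error"), 2))
# {'has_digit', 'len:long'}: the combination, not either feature alone,
# predicts the errors
```

MDL replaces this ad hoc score with a principled criterion that also selects how many patterns to report, which is what makes Premise robust on imbalanced, high-dimensional data.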
Federated learning allows multiple parties to collaboratively train a joint model without sharing their local data. This enables applications of machine learning in settings with inherently distributed, undisclosable data, such as the medical domain. In practice, joint training is usually achieved by aggregating local models, for which the local training objectives have to be similar in expectation to the joint (global) objective. Often, however, local datasets are so small that the local objectives differ greatly from the global objective, causing federated learning to fail. We propose a novel approach that intertwines model aggregations with permutations of local models. The permutations expose each local model to a daisy chain of local datasets, resulting in more efficient training in data-sparse domains. This enables training on extremely small local datasets, such as patient data across hospitals, while retaining the training efficiency and privacy benefits of federated learning.
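A toy sketch of the daisy-chaining idea with linear models in plain NumPy follows; the aggregation period, the permutation schedule, and the regression task are assumptions for illustration, not the paper's exact protocol:

```python
import numpy as np

rng = np.random.default_rng(0)

def local_step(w, X, y, lr=0.1):
    """One gradient step of least-squares regression on a client's data."""
    grad = 2 * X.T @ (X @ w - y) / len(y)
    return w - lr * grad

# hypothetical setup: 4 clients, each holding only 5 samples of a shared task
true_w = np.array([1.0, -2.0])
clients = []
for _ in range(4):
    X = rng.standard_normal((5, 2))
    clients.append((X, X @ true_w + 0.01 * rng.standard_normal(5)))

models = [np.zeros(2) for _ in clients]
for rnd in range(200):
    models = [local_step(w, X, y) for w, (X, y) in zip(models, clients)]
    if rnd % 10 == 9:                     # periodic FedAvg-style aggregation
        avg = np.mean(models, axis=0)
        models = [avg.copy() for _ in models]
    else:                                 # daisy-chaining: permute models across clients
        models = [models[i] for i in rng.permutation(len(models))]

print(np.round(models[0], 2))             # close to true_w despite tiny local sets
```

Between aggregations, each model wanders through a chain of clients, so it effectively trains on the union of several tiny datasets without any raw data ever leaving a client.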
The analysis of network structure is essential to many scientific areas, ranging from biology to sociology. As the computational task of clustering these networks into partitions, i.e., solving the community detection problem, is generally NP-hard, heuristic solutions are indispensable. The exploration of expedient heuristics has led to the development of particularly promising approaches in the emerging technology of quantum computing. Motivated by the substantial hardware demands of all established quantum community detection approaches, we introduce a novel QUBO-based approach that needs only as many qubits as the graph has nodes and is represented by a QUBO matrix as sparse as the input graph's adjacency matrix. The substantial improvement in the sparsity of the QUBO matrix, which is typically very dense in related work, is achieved through the novel concept of separation nodes. Instead of assigning every node to a community directly, this approach relies on the identification of a separation-node set, which -- upon its removal from the graph -- yields a set of connected components representing the core components of the communities. Employing a greedy heuristic to assign the nodes from the separation-node set to the identified community cores, subsequent experimental results yield a proof of concept. This work hence presents a promising approach to NISQ-ready quantum community detection, catalyzing the application of quantum computers to the network structure analysis of large-scale, real-world problem instances.
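The QUBO formulation itself is the paper's contribution and is not reproduced here; the following networkx sketch only illustrates the separation-node concept and the greedy core assignment on a toy graph, with the separation set assumed rather than solved for:

```python
import networkx as nx

# toy graph: two triangles joined through node 3 (a natural separation node)
G = nx.Graph([(0, 1), (1, 2), (0, 2), (1, 3), (2, 3),
              (3, 4), (4, 5), (5, 6), (4, 6)])

separation = {3}                          # assume the QUBO stage selected node 3
H = G.copy()
H.remove_nodes_from(separation)
cores = [set(c) for c in nx.connected_components(H)]   # community cores

# greedy step: attach each separation node to the core holding most neighbors
for v in separation:
    best = max(cores, key=lambda core: sum(u in core for u in G[v]))
    best.add(v)

print(cores)   # node 3 joins the triangle {0, 1, 2} it shares two edges with
```

Removing the separation set decomposes the graph into its community cores, so the hard combinatorial choice is reduced to picking that set, which is what keeps the QUBO matrix as sparse as the adjacency matrix.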
Efficient surrogate modelling is a key requirement for uncertainty quantification in data-driven scenarios. In this work, a novel approach of using Sparse Random Features for surrogate modelling, in combination with self-supervised dimensionality reduction, is described. The method is compared to other methods on synthetic and real data obtained from crashworthiness analyses. The results show the superiority of the approach described here over state-of-the-art surrogate modelling techniques, namely Polynomial Chaos Expansions and Neural Networks.
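A generic NumPy sketch of a sparse random-feature surrogate follows; the sparsification rule, feature count, and toy response are assumptions, since the paper's exact construction (and its self-supervised dimensionality reduction) is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_sparse_rff(d, n_feat=300, scale=1.0, sparsity=0.8, rng=rng):
    """Random-Fourier-feature map whose frequency matrix is mostly zeroed out."""
    W = rng.standard_normal((d, n_feat)) / scale
    W *= rng.random((d, n_feat)) > sparsity        # keep roughly 20% of the entries
    b = rng.uniform(0, 2 * np.pi, n_feat)
    return lambda X: np.sqrt(2.0 / n_feat) * np.cos(X @ W + b)

# hypothetical surrogate task: cheap approximation of an expensive simulator
X = rng.uniform(-1, 1, (200, 5))
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2             # stand-in for simulation output

phi = make_sparse_rff(d=5)
coef, *_ = np.linalg.lstsq(phi(X), y, rcond=None)  # linear fit in feature space

X_new = rng.uniform(-1, 1, (5, 5))
print(phi(X_new) @ coef)                            # surrogate predictions
```

Because fitting is a single linear solve in feature space, such surrogates remain cheap to train and to re-evaluate, which matters when each true sample is a full crash simulation.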
In the era of noisy intermediate-scale quantum devices, variational quantum circuits (VQCs) are currently one of the main strategies for building quantum machine learning models. These models are made up of a quantum part and a classical part. The quantum part is given by a parametrization $U$, which, in general, is obtained from the product of different quantum gates. In turn, the classical part corresponds to an optimizer that updates the parameters of $U$ in order to minimize a cost function $C$. However, despite the many applications of VQCs, several questions remain open, for example: What is the best sequence of gates to use? How should their parameters be optimized? Which cost function should be used? How does the architecture of the quantum chip influence the final results? In this article, we focus on answering the last question. We show that, in general, the cost function tends to a typical average value the closer the parametrization used is to a $2$-design. Therefore, the closer this parametrization is to a $2$-design, the less the result of the quantum neural network model will depend on its parametrization. As a consequence, we can use the architecture of the quantum chip itself to define the VQC parametrization, avoiding the use of additional swap gates and thus diminishing the VQC depth and the associated errors.
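To make the concentration claim concrete, here is the standard first-moment argument from the barren-plateau literature, stated as background context rather than taken from this paper: for a cost $C(\theta) = \operatorname{Tr}[O\, U(\theta)\rho\, U^\dagger(\theta)]$ with observable $O$ and input state $\rho$, any parametrization matching the Haar measure up to the first moment (which every $2$-design does) satisfies

$$\mathbb{E}_{\theta}\big[C(\theta)\big] = \operatorname{Tr}\!\Big[O\, \mathbb{E}_{\theta}\big[U(\theta)\rho\, U^\dagger(\theta)\big]\Big] = \operatorname{Tr}\!\Big[O\,\tfrac{\mathbb{1}}{d}\Big] = \frac{\operatorname{Tr}[O]}{d},$$

with $d = 2^n$ for $n$ qubits. Matching the second moment as well pins down the variance of $C(\theta)$, which shrinks with $d$; this is the sense in which the cost tends to a typical average value independent of the parametrization.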
Recent trends in language modeling have focused on increasing performance through scaling, and have resulted in an environment where training language models is out of reach for most researchers and practitioners. While most in the community are asking how to push the limits of extreme computation, we ask the opposite question: How far can we get with a single GPU in just one day? We investigate the downstream performance achievable with a transformer-based language model trained completely from scratch with masked language modeling for a single day on a single consumer GPU. Aside from re-analyzing nearly all components of the pretraining pipeline for this scenario and providing a modified pipeline with performance close to BERT, we investigate why scaling down is hard, and which modifications actually improve performance in this scenario. We provide evidence that even in this constrained setting, performance closely follows scaling laws observed in large-compute settings. Through the lens of scaling laws, we categorize a range of recent improvements to training and architecture and discuss their merit and practical applicability (or lack thereof) for the limited compute setting.
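For context, compute scaling laws of the kind referenced here are typically written as power laws; the form and symbols below are a generic illustration, not the paper's fitted law:

$$L(C) \;\approx\; L_{\infty} + \left(\frac{C_0}{C}\right)^{\alpha},$$

where $C$ is the training compute, $L_{\infty}$ the irreducible loss, and $C_0, \alpha > 0$ fitted constants. The claim in the abstract is that even a single-GPU, single-day budget sits on the same curve as large-compute runs.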
This short paper discusses continually updated causal abstractions as a potential direction of future research. The key idea is to revise the existing level of causal abstraction to a different level of detail that is both consistent with the history of observed data and more effective in solving a given task.
State-of-the-art poetry generation systems are often complex. They either consist of task-specific model pipelines, incorporate prior knowledge in the form of manually created constraints or both. In contrast, end-to-end models would not suffer from the overhead of having to model prior knowledge and could learn the nuances of poetry from data alone, reducing the degree of human supervision required. In this work, we investigate end-to-end poetry generation conditioned on styles such as rhyme, meter, and alliteration. We identify and address lack of training data and mismatching tokenization algorithms as possible limitations of past attempts. In particular, we successfully pre-train and release ByGPT5, a new token-free decoder-only language model, and fine-tune it on a large custom corpus of English and German quatrains annotated with our styles. We show that ByGPT5 outperforms other models such as mT5, ByT5, GPT-2 and ChatGPT, while also being more parameter efficient and performing favorably compared to humans. In addition, we analyze its runtime performance and introspect the model's understanding of style conditions. We make our code, models, and datasets publicly available.
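To illustrate what "token-free" means in practice: ByT5-style models such as ByGPT5 consume raw UTF-8 bytes instead of learned subword tokens, so character-level phenomena like rhyme, meter, and alliteration are directly visible to the model. A minimal sketch follows; the exact preprocessing of ByGPT5 is an assumption here:

```python
# byte-level "tokenization": every UTF-8 byte is a token id, no learned vocabulary
text = "Über allen Gipfeln ist Ruh"
ids = list(text.encode("utf-8"))
print(ids[:6])                       # [195, 156, 98, 101, 114, 32]: the umlaut spans two byte tokens
print(bytes(ids).decode("utf-8"))    # lossless round trip back to the text
```

Because no tokenizer has to agree between pre-training and fine-tuning corpora, the tokenization-mismatch limitation identified above disappears by construction.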